Library-Centric Software Design LCSD'06

Libraries are central to all major scientific, engineering, and business areas, yet the design, implementation, and use of libraries are underdeveloped arts. This workshop is one of the first steps in the process of placing all aspects of libraries on a sound technical and scientific basis through research into fundamental issues and documentation of best practices.

I think this is an important subject. Take a look at the CFP and see if you can contribute something!

I think languages have some way to go before they can enable really good libraries. Among the topics mentioned in the CFP, these are of particular interest: design of language facilities and tools in support of library definition and use; extensibility, parameterization, and customization; distribution of libraries; and specification of libraries and their semantics.

Paul Vick: BASIC principles

Visual Basic is a general-purpose programming language that prioritizes ease of use. In particular, Visual Basic is designed with the following principles in mind...

The principles are enumerated here.

Yet another one of those "language philosophy" kind of things. While the list isn't surprising, I think it's good to try to be explicit about these things. This allows you to reflect on them, and allows others to criticize both the choice of principles and the way they are manifested in the language...

The Problem With Threads

Lee, E.A., "The Problem With Threads", IEEE Computer, vol. 39, no. 5, May 2006, pp. 33-42, available online as U.C. Berkeley EECS Department Technical Report UCB/EECS-2006-1

For concurrent programming to become mainstream, we must discard threads as a programming model. Nondeterminism should be judiciously and carefully introduced where needed, and it should be explicit in programs.

Many of the points about concurrency raised in this article will be familiar to LtU readers, particularly those who have any familiarity with CTM, but the article does provide a good summary of the issues. Beyond that, what I found interesting (especially from a PLT perspective) is Lee's contention that the emphasis on developing general-purpose languages that support concurrency is misplaced. Lee believes that a better approach is to develop what he calls "coordination languages", which focus on arranging sequential components written in conventional languages into some concurrent configuration (I suppose that piping in a Unix shell could be considered a limited coordination language). Quoting from the article:

"Coordination languages do introduce new syntax, but that syntax serves purposes that are orthogonal to those of established programming languages. Whereas a general-purpose concurrent language like Erlang or Ada has to include syntax for mundane operations such as arithmetic expressions, a coordination language need not specify anything more than coordination. Given this, the syntax can be noticeably distinct."

It's not immediately obvious to me that there's anything preventing a "coordination language" from being a well-defined subset of some more general language. Lee's key point seems to involve making coordination constructs syntactically distinct (e.g. block diagrams vs. text). Which, of course, raises some interesting questions about whether other important facets of a language (such as the language of type declarations) should also have strongly syntactically-distinct representations, and just how homogeneous (Lisp anyone?) the syntax of a language should be...
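To make the subset idea concrete, here is a toy sketch in Haskell (my own illustration; none of the names come from Lee's article). The sequential components are ordinary pure functions, while the coordination layer is a tiny, clearly separated vocabulary for wiring them into a pipeline over channels:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forever)

    -- Sequential components: ordinary, deterministic functions.
    -- All of the actual computation lives here, in the host language.
    double :: Int -> Int
    double = (* 2)

    describe :: Int -> String
    describe n = "result: " ++ show n

    -- The coordination layer: 'stage' lifts a sequential component into
    -- a pipeline stage; the concurrency is confined to this wiring code.
    stage :: (a -> b) -> Chan a -> Chan b -> IO ()
    stage f inp out = forever (readChan inp >>= writeChan out . f)

    main :: IO ()
    main = do
      c1 <- newChan
      c2 <- newChan
      c3 <- newChan
      _  <- forkIO (stage double c1 c2)
      _  <- forkIO (stage describe c2 c3)
      mapM_ (writeChan c1) [1, 2, 3]
      mapM_ (const (readChan c3 >>= putStrLn)) [1 :: Int, 2, 3]

Since each channel has a single producer and a single consumer, the pipeline is deterministic: the nondeterminism of the underlying threads never becomes observable, which is roughly the property Lee wants coordination languages to enforce.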

Jumbo Lambda Calculus

Two new papers by Paul Blain Levy, "Jumbo Lambda Calculus" and the extended version "Jumbo Connectives in Type Theory and Logic", are available on his web page. Part of the abstract:

We make an argument that, for any study involving computational effects such as divergence or continuations, the traditional syntax of simply typed lambda-calculus cannot be regarded as canonical, because standard arguments for canonicity rely on isomorphisms that may not exist in an effectful setting. To remedy this, we define a "jumbo lambda-calculus" that fuses the traditional connectives together into more general ones, so-called "jumbo connectives". We provide two pieces of evidence for our thesis that the jumbo formulation is advantageous.

(From the types list.)
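To see the kind of canonicity failure the abstract is alluding to, consider the curry/uncurry isomorphism. The following Haskell fragment (my own illustration, not from the paper) shows that once an effect is in play, a curried function can act after receiving only its first argument, so the two types are no longer interchangeable:

    -- In a pure setting, curry/uncurry witness (A, B) -> C  ≅  A -> B -> C.
    -- With an effect (here IO) the isomorphism breaks: the curried version
    -- can perform an observable effect between its two arguments.

    curried :: Int -> IO (Int -> IO Int)
    curried x = do
      putStrLn ("saw first argument: " ++ show x)
      return (\y -> return (x + y))

    uncurried :: (Int, Int) -> IO Int
    uncurried (x, y) = return (x + y)

    main :: IO ()
    main = do
      f <- curried 1          -- prints before the second argument exists
      _ <- f 2
      _ <- uncurried (1, 2)   -- no intermediate effect is possible here
      return ()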

The case for Semantic Analysis

The article "How to statically ensure software reliability" is aimed mostly at C programmers in the embedded space:

The term semantic analysis refers to a group of advanced static analysis techniques that enable users to detect runtime and/or dynamic errors in software applications.

There is a conspicuous silence in the article on how PLs might help the tools. And I can't help but think that the static semantic analysis it describes amounts to constraint programming (propagate and branch).
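For what it's worth, here is the sort of thing I have in mind, as a toy Haskell sketch (only the "propagate" half of propagate-and-branch, and entirely my own construction): approximate every expression by an interval of possible values, and flag a division whose divisor interval contains zero:

    data Expr = Lit Int
              | Input Int Int       -- an unknown input known to lie in [lo, hi]
              | Add Expr Expr
              | Div Expr Expr

    type Interval = (Int, Int)

    analyze :: Expr -> Either String Interval
    analyze (Lit n)       = Right (n, n)
    analyze (Input lo hi) = Right (lo, hi)
    analyze (Add a b)     = do
      (alo, ahi) <- analyze a
      (blo, bhi) <- analyze b
      Right (alo + blo, ahi + bhi)
    analyze (Div a b)     = do
      _          <- analyze a
      (blo, bhi) <- analyze b
      if blo <= 0 && 0 <= bhi
        then Left "possible division by zero"
        else Right (minBound, maxBound)  -- crude over-approximation

    main :: IO ()
    main = do
      print (analyze (Div (Lit 10) (Input 1 5)))     -- safe: divisor in [1,5]
      print (analyze (Div (Lit 10) (Input (-1) 5)))  -- flagged: 0 in [-1,5]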

Building Interpreters by Composing Monads

We exhibit a set of functions coded in Haskell that can be used as building blocks to construct a variety of interpreters for Lisp-like languages. The building blocks are joined merely through functional composition. Each building block contributes code to support a specific feature, such as numbers, continuations, function calls, or nondeterminism. The result of composing some number of building blocks is a parser, an interpreter, and a printer that support exactly the expression forms and data types needed for the combined set of features, and no more.

The data structures are organized as pseudomonads, a generalization of monads that allows composition. Functional composition of the building blocks implies type composition of the relevant pseudomonads.

So actually it is about building interpreters by composing pseudomonads.

PS: I stumbled upon this paper while trying to factor an interpreter into a set of features (and yes, I tried to package them as monads). After a day of fighting with undecidable instances and rigid type variables I gave up and started googling; it turned out I was trying to reinvent the wheel.
Any comments on how pseudomonads relate to arrows (and other generalizations of monads) are appreciated.
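For anyone attempting the same factoring today, here is roughly where I ended up before finding the paper: a minimal sketch using monad transformers (my code; the paper itself predates them and uses pseudomonads instead), where each feature is a layer and composing the layers composes the interpreter:

    import Control.Monad.Reader (ReaderT, runReaderT, ask)
    import Control.Monad.Except (ExceptT, runExceptT, throwError)

    data Expr = Num Int | Var String | Add Expr Expr | Div Expr Expr

    type Env    = [(String, Int)]
    type Interp = ReaderT Env (ExceptT String IO)  -- environments + errors + I/O

    eval :: Expr -> Interp Int
    eval (Num n)   = return n
    eval (Var x)   = do
      env <- ask
      maybe (throwError ("unbound variable: " ++ x)) return (lookup x env)
    eval (Add a b) = (+) <$> eval a <*> eval b
    eval (Div a b) = do
      n <- eval a
      d <- eval b
      if d == 0 then throwError "division by zero" else return (n `div` d)

    main :: IO ()
    main = do
      r <- runExceptT (runReaderT (eval (Add (Var "x") (Num 1))) [("x", 41)])
      print r  -- Right 42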

Block performance in Ruby

What I find most interesting here is that the difference between yield and "inline" is so negligible, which explains why Ruby users are so enamored with the feature.

That said, the runtime wonk in me would love for some Ruby expert out there to explain the underlying reasons why yield is so much faster than Proc.call.

I can guess, but I'd rather be told.

Don Box does some experimenting...

MathLang

MathLang is, besides being what the name suggests (a mathematical language), also a framework for writing mathematical texts. It allows for more formalisation than the normal Common Mathematical Language does, in such a way that one can check the correctness of the text (at some given level), but also convert it to even more stringent forms so that it can be checked by proof checkers such as Mizar, Coq, or PVS.

Computer Science Looks for a Remake

...there is a deep malaise in computer science these days. Professors bemoan falling enrollments, a decline in prestige and a lack of attention to real-world problems. But, paradoxically, they say the future of CS has never been brighter, both within the discipline and in fields that computer technology will increasingly influence. Computerworld's Gary Anthes recently asked six CS professors what lies ahead for the field.

This piece isn't directly related to programming languages (in fact, I think this is one of the problems with it), but it will interest many LtU regulars.

To make this more relevant for LtU, let me rephrase the question: What's the most important and interesting CS research at the moment, from a PL perspective?

HashCaml

HashCaml is an extension of the OCaml bytecode compiler with support for type-safe marshalling and related naming features.

Peter Sewell and crew follow up on their work on Acute:

In this paper we put these ideas into practice, describing the HashCaml extension to the OCaml bytecode compiler, which supports type-safe and abstraction-safe marshalling, together with related naming constructs. Our contribution is threefold: (1) We show how to define globally meaningful runtime type names for key OCaml type constructs that were not covered in our previous work, dealing with the generativity issues involved: user-defined variant and record types, substructures, functors, arbitrary ascription, separate compilation, and external C functions. (2) We support marshalling within polymorphic functions by type-passing, requiring us to build compositional runtime type names and revisit the OCaml relaxed value restriction. We show that with typed marshalling one must fall back to the SML97 value restriction. (3) We show how the above can be implemented with reasonable performance as an unintrusive modification to the existing OCaml language, implementation, and standard libraries. An alpha release of HashCaml, capable of bootstrapping itself, is available, along with an example type-safe distributed communication library written in the language.
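For readers who want the flavor of the technique without reading the paper: the core move is to pair the marshalled bytes with a globally meaningful runtime name for their type, and to check that name before unmarshalling. Here is a loose analogue in Haskell (my own sketch; a printed TypeRep is a much weaker stand-in for HashCaml's hashes of type definitions):

    {-# LANGUAGE ScopedTypeVariables #-}
    import Data.Binary (Binary, encode, decode)
    import Data.ByteString.Lazy (ByteString)
    import Data.Proxy (Proxy (..))
    import Data.Typeable (Typeable, typeRep)

    -- Ship a runtime "type name" alongside the payload...
    marshal :: forall a. (Binary a, Typeable a) => a -> (String, ByteString)
    marshal x = (show (typeRep (Proxy :: Proxy a)), encode x)

    -- ...and verify it before decoding, instead of trusting the bytes.
    unmarshal :: forall a. (Binary a, Typeable a)
              => (String, ByteString) -> Either String a
    unmarshal (name, bytes)
      | name == expected = Right (decode bytes)
      | otherwise        = Left ("type mismatch: got " ++ name
                                 ++ ", expected " ++ expected)
      where expected = show (typeRep (Proxy :: Proxy a))

    main :: IO ()
    main = do
      let packet = marshal (42 :: Int)
      print (unmarshal packet :: Either String Int)     -- Right 42
      print (unmarshal packet :: Either String String)  -- type mismatch

Of course, HashCaml's actual contribution lies in making those type names globally meaningful across abstract types, functors, and separate compilation, which a TypeRep does not attempt.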